
    Tailored retrieval of health information from the web for facilitating communication and empowerment of elderly people

    Nowadays, a patient acquires health information from the Web mainly through a “human-to-machine” communication process with a generic search engine. This, in turn, affects, positively or negatively, his/her empowerment level and the “human-to-human” communication process that occurs between a patient and a healthcare professional such as a doctor. A generic communication process can be modelled by considering its syntactic-technical, semantic-meaning, and pragmatic-effectiveness levels, and communication is effective only when all levels are fully addressed. In the case of retrieval of health information from the Web, although a generic search engine is able to work at the syntactic-technical level, the semantic and pragmatic aspects are left to the user, and this can be challenging, especially for elderly people. This work presents a custom search engine, FACILE, that works at all three communication levels and helps users overcome the challenges encountered during the search process. A patient can specify his/her information requirements in a simple way, and FACILE will retrieve the “right” amount of Web content in a language that he/she can easily understand. This facilitates comprehension of the retrieved information and positively affects the empowerment process and communication with healthcare professionals.
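    The abstract does not detail FACILE's ranking logic, but the idea of returning only a limited amount of easily understandable content can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the actual system: it uses the textstat package to keep only results whose text scores high on Flesch Reading Ease, and the function name, result shape, and thresholds are hypothetical.

        # Illustrative sketch only; FACILE's real pipeline is not described here.
        import textstat

        def filter_by_readability(results, min_flesch=60.0, max_results=5):
            """Keep at most `max_results` documents that are easy to read.

            `results` is assumed to be a list of dicts with "title" and "text".
            A Flesch Reading Ease score of 60+ roughly means plain language.
            """
            readable = [
                r for r in results
                if textstat.flesch_reading_ease(r["text"]) >= min_flesch
            ]
            # Return only a limited amount of content, mirroring the idea of
            # retrieving the "right" amount of information for the user.
            return readable[:max_results]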

    Anomaly detection with Prometheus

    Prometheus is a widely used application for monitoring Kubernetes systems. However, it does not provide an adequate solution for detecting complex anomalies. This work first describes the deployment of a Kubernetes system that uses Kafka and the implementation of a microservice that produces anomalies. With this setup, a labelled dataset is generated from data extracted from Prometheus. The resulting dataset can be used to develop a machine learning algorithm for anomaly detection. In addition, the study explains the tools used to understand the dataset and how to use it to develop a plug-in that predicts anomalies and triggers alarms in Prometheus Alertmanager when they are detected.
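    As a rough illustration of the kind of plug-in described above, the sketch below pulls a metric window from the Prometheus HTTP API, fits a simple anomaly detector, and pushes an alert to the Alertmanager v2 API when an anomaly is found. This is not the thesis implementation: the endpoint addresses, the choice of IsolationForest, and the alert labels are assumptions made for the example.

        import requests
        from sklearn.ensemble import IsolationForest

        PROMETHEUS = "http://localhost:9090"      # assumed Prometheus address
        ALERTMANAGER = "http://localhost:9093"    # assumed Alertmanager address

        def fetch_metric(query, start, end, step="30s"):
            # Query a range of samples for one metric from Prometheus.
            resp = requests.get(
                f"{PROMETHEUS}/api/v1/query_range",
                params={"query": query, "start": start, "end": end, "step": step},
            )
            resp.raise_for_status()
            values = resp.json()["data"]["result"][0]["values"]  # [[ts, "val"], ...]
            return [[float(v)] for _, v in values]

        def alert_anomalies(samples):
            # Fit a simple detector and raise a custom alert if any sample is anomalous.
            model = IsolationForest(contamination=0.01).fit(samples)
            if (model.predict(samples) == -1).any():
                requests.post(
                    f"{ALERTMANAGER}/api/v2/alerts",
                    json=[{
                        "labels": {"alertname": "MetricAnomaly", "severity": "warning"},
                        "annotations": {"summary": "Anomalous metric behaviour detected"},
                    }],
                )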

    Technical Debt Prioritization: State of the Art. A Systematic Literature Review

    Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand what Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review among 384 unique papers published until 2018, following a consolidated methodology applied in Software Engineering. We included 38 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and optimizing on different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We report an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary and that there is no consensus on which factors are important and how to measure them. Consequently, we cannot consider current research conclusive, and in this paper we outline different directions for necessary future investigations.

    Towards a trustworthiness model for Open Source software

    Trustworthiness is one of the main aspects that contribute to the adoption or rejection of a software product. This is true for any product in general, but it is especially true for Open Source Software (OSS), whose trustworthiness is sometimes still regarded as less guaranteed than that of closed source products. Only recently have several industrial software organizations started investigating the potential of OSS products as users or even producers. As they are now getting more and more involved in the OSS world, these software organizations are clearly interested in ways to assess the trustworthiness of OSS products, so as to choose OSS products that are adequate for their goals and needs. Trustworthiness is a major issue when people and organizations are faced with the selection and adoption of new software. Although some ad-hoc methods have been proposed, there is not yet general agreement about which software characteristics contribute to trustworthiness. Such methods, like OpenBQR [30] and other similar approaches [58][59], assess the trustworthiness of a software product by means of a weighted sum of specific quality evaluations. None of the existing methods based on weighted sums has been widely adopted. In fact, these methods typically leave the user with two hard problems, which are common to models built by means of weighted sums: identifying the factors that should be taken into account, and assigning to each of these factors the “correct” weight to adequately quantify its relative importance. Therefore, this work focuses on defining an adequate notion of trustworthiness of Open Source products and artifacts and identifying a number of factors that influence it, to help and guide both developers and users when deciding whether a given program (or library or other piece of software) is “good enough” and can be trusted for use in an industrial or professional context. The result of this work is a set of estimation models for the perceived trustworthiness of OSS. This work has been carried out in the context of the IST project QualiPSo (http://www.qualipso.eu/), funded by the EU in the 6th FP (IST-034763). The first step focuses on defining an adequate notion of trustworthiness of software products and artifacts and identifying a number of factors that influence it. The definition of the trustworthiness factors is driven by specific business goals for each organization. We therefore carried out a survey to elicit these goals and factors directly from industrial players, trying to derive the factors from real user needs instead of deriving them from our own personal beliefs and/or only from the available literature. The questions in the questionnaire were mainly classified in three categories: 1) organization, project, and role; 2) actual problems, actual trustworthiness evaluation processes, and factors; 3) wishes. These questions are needed to understand what information should be available but is not, and what indicators should be provided for an OSS product to help its adoption. To test the applicability of the trustworthiness factors identified by means of the questionnaires, we selected a set of OSS projects, widely adopted and generally considered trustworthy, to be used as references. Afterwards, a first quick analysis was carried out to check which factors were readily available on each project's web site.
The idea was to emulate the search for information carried out by a potential user who browses the project's web site but is not willing to spend too much effort and time on a complete analysis. By analyzing the results of this investigation, we discovered that for most of the trustworthiness factors there is generally not enough readily available information to make an objective assessment, even though some of these factors were ranked as very important by the respondents of our survey. To fill this gap, we defined a set of proxy measures to use whenever a factor cannot be directly assessed on the basis of readily available information. Moreover, some factors are not measurable if developers do not explicitly provide essential information. For instance, this happens for all factors that refer to countable data (e.g., the number of downloads cannot be evaluated reliably if the development community does not publish it). Then, by taking into account the trustworthiness factors and the experience gained through the project analysis, we defined a Goal/Question/Metric (GQM [29]) model for trustworthiness, to identify the qualities and metrics that determine the perception of trustworthiness by users. In order to measure the metrics identified in the GQM model, we identified a set of tools. When possible, tools were obtained by adapting, extending, and integrating existing tools. Since most of the metrics were not available via the selected tools, we developed MacXim, a static code analysis tool. The selected tools integrate a number of OSS tools that support the creation of a measurement plan, starting from the main actors' and stakeholders' objectives and goals (developer community, user community, business needs, specific users, etc.), down to the specific static and dynamic metrics that need to be collected to fulfill the goals. To validate the GQM model and build quantitative models of perceived trustworthiness and reliability, we collected both subjective evaluations and objective measures on a sample of 22 Java and 22 C/C++ OSS products. Objective measures were collected by means of MacXim and the other identified tools, while subjective evaluations were collected by means of more than 500 questionnaires. Specifically, the subjective evaluations concerned how users evaluate the trustworthiness, reliability, and other qualities of OSS; objective measures concerned software attributes like size, complexity, modularity, and cohesion. Finally, we correlated the objective code measures with users' and developers' evaluations of OSS products. The result is a set of quantitative models that account for the dependence of the perceivable qualities of OSS on objectively observable qualities of the code. Unlike the models based on weighted sums usually available in the literature, we obtained estimation models [87], so the relevant factors and their specific weights are identified via statistical analysis rather than in a more subjective way, as usually happens. Qualitatively, our results may not be totally surprising. For instance, it may be generally expected that bigger and more complex products are less trustworthy than smaller and simpler products; likewise, it is expected that well modularized products are more reliable.
For instance, our analyses indicate that OSS products are most likely to be trustworthy if: • their size is not greater than 100,000 effective LOC; • the number of Java packages is lower than 228. The models derived in this work can be used by end-users and developers who would like to evaluate the level of trustworthiness and reliability of existing OSS products and components they intend to use or reuse, based on measurable OSS code characteristics. These models can also be used by the developers of OSS products themselves, when setting code quality targets based on the level of trustworthiness and reliability they want to achieve. So, the information obtained via our models can be used as an additional piece of evidence when making informed decisions. Thus, unlike several discussions that are based on (sometimes interested) opinions about the quality of OSS, this study aims at deriving statistically significant models that are based on repeatable measures and user evaluations provided by a reasonably large sample of OSS users. The detailed results are reported in the next sections as follows:
    • Chapter 1 reports the introduction to this work
    • Chapter 2 reports the related literature review
    • Chapter 3 reports the identified trustworthiness factors
    • Chapter 4 describes how we built the trustworthiness model
    • Chapter 5 shows the tools we developed for this activity
    • Chapter 6 reports on the experimentation phase
    • Chapter 7 shows the results of the experimentation
    • Chapter 8 draws conclusions and highlights future work
    • Chapter 9 lists the publications made during the Ph.D.
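    The thesis derives statistical estimation models rather than a simple rule, but the two example thresholds quoted in the abstract can be encoded as a quick screening check. The sketch below is purely illustrative; the function name and the pass/fail interpretation are assumptions, and real use would rely on the full models.

        # Illustrative sketch only: encodes the two example thresholds above.
        def likely_trustworthy(effective_loc: int, java_packages: int) -> bool:
            """Rough screen for a Java OSS product based on the quoted thresholds."""
            return effective_loc <= 100_000 and java_packages < 228

        # Example: a 250 kLOC product with 300 packages fails the screen.
        print(likely_trustworthy(250_000, 300))  # False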

    Facilitating Scientometrics in Learning Analytics and Educational Data Mining – the LAK Dataset

    The Learning Analytics and Knowledge (LAK) Dataset represents an unprecedented corpus which exposes a near-complete collection of bibliographic resources for a specific research discipline, namely the connected areas of Learning Analytics and Educational Data Mining. Covering over five years of scientific literature from the most relevant conferences and journals, the dataset provides Linked Data about bibliographic metadata as well as the full text of the paper bodies. The latter was enabled through special licensing agreements with ACM for publications not yet available through open access. The dataset has been designed following established Linked Data patterns, reusing established vocabularies and providing links to established schemas and entity coreferences in related datasets. Given the temporal and topical coverage of the dataset, being a near-complete corpus of research publications of a particular discipline, it facilitates scientometric investigations, for instance about the evolution of a scientific field over time or correlations with other disciplines, which is documented through its usage in a wide range of scientific studies and applications.
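    As a rough illustration of the scientometric use of such a Linked Data corpus, the sketch below counts publications per year via a SPARQL query. The endpoint URL is a placeholder and the dcterms property is a common-vocabulary assumption, not necessarily the dataset's exact schema.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Placeholder endpoint; substitute the dataset's actual SPARQL endpoint.
        sparql = SPARQLWrapper("http://example.org/sparql")
        sparql.setQuery("""
            PREFIX dcterms: <http://purl.org/dc/terms/>
            SELECT ?year (COUNT(?paper) AS ?papers)
            WHERE { ?paper dcterms:issued ?year . }
            GROUP BY ?year
            ORDER BY ?year
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["year"]["value"], row["papers"]["value"])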